
Conversation

@nimrod-teich
Contributor

Remove the availability degradation mechanism that penalized providers when node errors occurred alongside successful responses. This mechanism was causing unnecessary provider score degradation for expected node behavior like Solana skipped slots.

Changes:

  • Remove availabilityDegrader field from RelayProcessor
  • Remove QoSAvailabilityDegrader interface (no longer needed)
  • Remove shouldDegradeAvailability logic from ProcessingResult()
  • Update all NewRelayProcessor callers to remove the parameter (see the sketch below)
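
For reviewers, a minimal sketch of the shape of this change follows. The names RelayProcessor, QoSAvailabilityDegrader, NewRelayProcessor, and ProcessingResult come from the PR; the field layout, signatures, and fallback logic are simplified assumptions for illustration, not the actual protocol/relaycore implementation.

```go
// Minimal sketch of the change, using hypothetical simplified signatures;
// the real types in protocol/relaycore carry more fields and parameters.
package relaycore

import "errors"

// RelayResult stands in for the real relay result type (assumed shape).
type RelayResult struct {
	ProviderAddress string
	Reply           []byte
	IsNodeError     bool
}

// Removed: the interface that let ProcessingResult() flag a provider for
// availability degradation when node errors arrived alongside successes.
//
//	type QoSAvailabilityDegrader interface {
//	    DegradeAvailability(providerAddress string)
//	}

type RelayProcessor struct {
	// availabilityDegrader QoSAvailabilityDegrader  // field removed
	successResults []RelayResult
	nodeErrors     []RelayResult
}

// NewRelayProcessor no longer accepts the degrader; all callers drop the argument.
func NewRelayProcessor() *RelayProcessor {
	return &RelayProcessor{}
}

// ProcessingResult returns the best available reply. The former
// shouldDegradeAvailability branch, which penalized the provider whenever
// node errors coexisted with successful responses (e.g. Solana skipped
// slots), is gone.
func (rp *RelayProcessor) ProcessingResult() (*RelayResult, error) {
	if len(rp.successResults) > 0 {
		return &rp.successResults[0], nil
	}
	if len(rp.nodeErrors) > 0 {
		return &rp.nodeErrors[0], nil
	}
	return nil, errors.New("no responses to process")
}
```

The net behavioral effect is that a provider returning expected node-level errors alongside successful replies keeps its availability score intact.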

Description

Closes: #XXXX


Author Checklist

All items are required. Please add a note to an item if it is not applicable, and
add links to any relevant follow-up issues.

I have...

  • read the contribution guide
  • included the correct type prefix in the PR title; examples of the prefixes can be found below
  • confirmed ! in the type prefix if API or client breaking change
  • targeted the main branch
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if an item is not applicable, and add
your handle next to the items you reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation accuracy, and tests and test coverage

NadavLevi previously approved these changes Jan 27, 2026
@nimrod-teich nimrod-teich force-pushed the fix/remove-availability-degradation-on-node-errors branch from 384f86d to 110fd13 on January 27, 2026 at 16:16
@github-actions

github-actions bot commented Jan 27, 2026

Test Results

    7 files ±0   85 suites ±0   29m 55s ⏱️ (−38s)
3,337 tests ±0   3,336 ✅ ±0   1 💤 ±0   0 ❌ ±0
3,530 runs  ±0   3,529 ✅ ±0   1 💤 ±0   0 ❌ ±0

Results for commit a9fa436. Comparison against base commit 3a59b97.

♻️ This comment has been updated with latest results.

@codecov

codecov bot commented Jan 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Flag Coverage Δ
consensus 8.55% <ø> (ø)
protocol 33.87% <ø> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
protocol/relaycore/interfaces.go 0.00% <ø> (ø)
protocol/relaycore/relay_processor.go 51.57% <ø> (+1.32%) ⬆️
protocol/rpcconsumer/rpcconsumer_server.go 30.31% <ø> (-0.04%) ⬇️
protocol/rpcsmartrouter/rpcsmartrouter_server.go 31.63% <ø> (-0.03%) ⬇️

... and 3 files with indirect coverage changes


NadavLevi previously approved these changes Jan 29, 2026
@nimrod-teich nimrod-teich force-pushed the fix/remove-availability-degradation-on-node-errors branch from 110fd13 to a9fa436 on January 29, 2026 at 11:25
@nimrod-teich nimrod-teich merged commit fbf2ce0 into main Jan 29, 2026
31 checks passed
@nimrod-teich nimrod-teich deleted the fix/remove-availability-degradation-on-node-errors branch January 29, 2026 16:22